Neural compression offers a domain-agnostic approach to creating codecs for lossy or lossless compression via deep generative models. For sequence compression, however, most deep sequence models have costs that scale with the sequence length rather than the sequence complexity. In this work, we instead treat data sequences as observations from an underlying continuous-time process and learn how to efficiently discretize them while retaining information about the full sequence. By decoupling sequential information from its temporal discretization, our approach allows for higher compression rates and lower computational cost. Moreover, the continuous-time approach naturally allows us to decode at different time intervals. We empirically verify our approach on multiple domains involving compression of video and motion capture sequences, showing that it automatically achieves reductions in bit rate by learning how to discretize.
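To make the rate side of this concrete, below is a minimal, hypothetical sketch (not the paper's code) of how a neural codec scores a discretized latent sequence: an entropy coder spends roughly -log2 p(z) bits per symbol under a learned prior, so a learned discretization that keeps fewer latents directly reduces the bit count.

```python
import numpy as np

def estimated_bits(latents, prior_probs):
    """Expected code length of discretized latents under a learned prior.

    An entropy coder (e.g., arithmetic coding) achieves close to
    -log2 p(z) bits per symbol, so the sum below approximates the
    compressed size of the sequence.
    """
    return float(np.sum(-np.log2([prior_probs[z] for z in latents])))

# Toy example: a learned, coarser discretization keeps 4 of 12 latents.
prior = {0: 0.70, 1: 0.15, 2: 0.10, 3: 0.05}
dense = [0, 0, 1, 0, 0, 2, 0, 0, 1, 0, 0, 3]   # one latent per time step
coarse = [0, 1, 2, 3]                           # sparser discretization

print(f"dense:  {estimated_bits(dense, prior):.1f} bits")
print(f"coarse: {estimated_bits(coarse, prior):.1f} bits")
```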
Our situated environments are highly dynamic and full of uncertainty, which hinders the widespread adoption of machine-led Intelligent Decision-Making (IDM) in real-world scenarios. This means IDM should have the capability of continuously learning new skills and efficiently generalizing across wider applications. IDM benefits from any new approaches and theoretical breakthroughs toward Artificial General Intelligence (AGI) that break the barriers between tasks and applications. Recent research has thoroughly examined the Transformer neural architecture as a backbone foundation model and its generalization to various tasks, including computer vision, natural language processing, and reinforcement learning. We therefore argue that a foundation decision model (FDM) can be established by formulating various decision-making tasks as a sequence decoding task using the Transformer architecture; this would be a promising solution for advancing the applications of IDM in more complex real-world tasks. In this paper, we elaborate on how a foundation decision model improves the efficiency and generalization of IDM. We also discuss potential applications of an FDM in multi-agent game AI, production scheduling, and robotics tasks. Finally, through a case study, we demonstrate our realization of the FDM, DigitalBrain (DB1), with 1.2 billion parameters, which achieves human-level performance on 453 tasks, including text generation, image captioning, video game playing, robotic control, and the traveling salesman problem. As a foundation decision model, DB1 would be a baby step towards more autonomous and efficient real-world IDM applications.
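One common way to instantiate "decision-making as sequence decoding", in the spirit of Decision Transformer, is to interleave return-to-go, state, and action tokens and decode actions with a causal Transformer. The sketch below assumes that tokenization and generic dimensions; it is an illustration of the formulation, not DB1's architecture.

```python
import torch
import torch.nn as nn

class SequenceDecisionModel(nn.Module):
    """Casts decision-making as sequence decoding: interleave
    (return-to-go, state, action) tokens per time step and predict the
    next action with a causally masked Transformer."""

    def __init__(self, state_dim, act_dim, d_model=128):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=2)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        # Each embedding is (B, T, d_model); interleave to (B, 3T, d_model)
        # in the order (rtg, state, action) per time step.
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states), self.embed_action(actions)],
            dim=2,
        ).flatten(1, 2)
        T = tokens.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.decoder(tokens, mask=causal)
        # Predict the action from each state-token position (offset 1 of 3).
        return self.predict_action(h[:, 1::3])

model = SequenceDecisionModel(state_dim=17, act_dim=6)
out = model(torch.randn(2, 10, 1), torch.randn(2, 10, 17), torch.randn(2, 10, 6))
print(out.shape)  # torch.Size([2, 10, 6])
```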
The understanding capabilities of current state-of-the-art 3D models are limited by datasets with a small number of annotated samples and a pre-defined set of categories. In the 2D domain, recent advances have shown that similar problems can be significantly alleviated by employing knowledge from other modalities, such as language. Inspired by this, leveraging multimodal information for the 3D modality is promising for improving 3D understanding under a restricted data regime, but this line of research is not well studied. Therefore, we introduce ULIP to learn a unified representation of image, text, and 3D point cloud by pre-training with object triplets from the three modalities. To overcome the shortage of training triplets, ULIP leverages a pre-trained vision-language model that has already learned a common visual and textual space by training on massive image-text pairs. Then, ULIP learns a 3D representation space aligned with the common image-text space, using a small number of automatically synthesized triplets. ULIP is agnostic to the 3D backbone network and can easily be integrated into any 3D architecture. Experiments show that ULIP effectively improves the performance of multiple recent 3D backbones by simply pre-training them on ShapeNet55 using our framework, achieving state-of-the-art performance in both standard 3D classification and zero-shot 3D classification on ModelNet40 and ScanObjectNN. ULIP also improves the performance of PointMLP by around 3% in 3D classification on ScanObjectNN, and outperforms PointCLIP by 28.8% in top-1 accuracy for zero-shot 3D classification on ModelNet40. Our code and pre-trained models will be released.
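The cross-modal alignment ULIP describes can be sketched as a symmetric contrastive (InfoNCE) objective that pulls a trainable point-cloud embedding toward frozen image and text embeddings of the same object triplet; the equal loss weighting and temperature below are assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def alignment_loss(pc_feat, img_feat, txt_feat, tau=0.07):
    """Symmetric InfoNCE pulling a 3D point-cloud embedding toward the
    frozen vision-language embeddings of the same object triplet."""
    pc = F.normalize(pc_feat, dim=-1)
    losses = []
    for anchor in (F.normalize(img_feat, dim=-1), F.normalize(txt_feat, dim=-1)):
        logits = pc @ anchor.t() / tau          # (B, B) similarity matrix
        labels = torch.arange(pc.size(0))       # matching pairs on the diagonal
        losses.append(0.5 * (F.cross_entropy(logits, labels)
                             + F.cross_entropy(logits.t(), labels)))
    return sum(losses)

B, d = 8, 512  # batch of triplets, shared embedding width
loss = alignment_loss(torch.randn(B, d), torch.randn(B, d), torch.randn(B, d))
print(loss.item())
```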
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
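Since the checkpoints are public, BLOOM can be loaded through the Hugging Face transformers API. The snippet below uses the small 560M variant from the same model family, since the full 176B checkpoint requires multi-GPU hardware; the prompt is an arbitrary example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# bigscience/bloom is the 176B model; bloom-560m is the smallest sibling.
name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("The capital of France is", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```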
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper, we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
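The released Flan-T5 checkpoints follow the standard seq2seq interface in transformers; a minimal zero-shot usage example (the small checkpoint and prompt are illustrative choices):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Checkpoints range from google/flan-t5-small up to google/flan-t5-xxl.
name = "google/flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

prompt = "Answer the question: what color is the sky on a clear day?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```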
Chapter generation has become a practical technique for online videos: chapter breakpoints enable users to quickly find the parts they need and obtain summarizing annotations. However, there are no public methods or datasets for this task. To facilitate research in this direction, we introduce a new dataset called Chapter-Gen, which consists of approximately 10k user-generated videos with annotated chapter information. Our data collection procedure is fast, scalable, and requires no additional manual annotation. On top of this dataset, we design an effective baseline tailored to the video chapter generation task. It captures two aspects of a video, visual dynamics and narration text, and exploits local and global video features for chapter localization and caption generation, respectively. To parse long videos efficiently, a skip sliding window mechanism is designed to localize potential chapters, and a cross-attention multimodal fusion module is developed to aggregate local features for caption generation. Our experiments show that the proposed framework achieves superior results over existing methods, and that method designs from similar tasks cannot be transferred directly, even after fine-tuning. Code and dataset are available at https://github.com/czt117/mvcg.
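As a rough illustration of what a skip sliding window buys over exhaustive frame-by-frame scanning, the hypothetical sketch below scores coarsely strided windows of per-frame boundary scores and keeps only promising candidates. The window size, stride, and threshold are placeholders, not the paper's values.

```python
import numpy as np

def skip_sliding_windows(frame_scores, win=32, stride=16, threshold=0.5):
    """Slide a window with a large stride (skipping most positions) and
    keep only windows whose mean chapter-boundary score clears a
    threshold, yielding candidate chapter locations cheaply."""
    candidates = []
    for start in range(0, len(frame_scores) - win + 1, stride):
        score = float(np.mean(frame_scores[start:start + win]))
        if score > threshold:
            candidates.append((start, start + win, score))
    return candidates

scores = np.random.rand(600)  # stand-in for per-frame boundary scores
print(skip_sliding_windows(scores)[:3])
```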
In this paper, we address the problem of active robotic 3D reconstruction of objects. In particular, we study how a mobile robot with an arm-mounted camera can select a favorable number of views to efficiently recover an object's 3D shape. In contrast to existing solutions to this problem, we leverage the popular neural radiance field representation of objects, which has recently shown impressive results on various computer vision tasks. However, it is not straightforward to directly reason about an object's explicit 3D geometric details with such a representation, which makes the next-best-view selection problem for dense 3D reconstruction challenging. This paper introduces a ray-based volumetric uncertainty estimator, which computes the entropy of the weight distribution along each ray of the object's implicit neural representation. We show that the uncertainty of the underlying 3D geometry can be inferred for a candidate novel view with the proposed estimator. We then present a next-best-view selection policy guided by this ray-based volumetric uncertainty in neural radiance field representations. Encouraging experimental results on synthetic and real-world data suggest that the approach presented in this paper opens a new research direction of using implicit 3D object representations for the next-best-view problem in robot vision applications, distinguishing it from existing methods that rely on explicit 3D geometric modeling.
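The quantity at the heart of the estimator, the entropy of the volume-rendering weight distribution along a ray, follows directly from the standard NeRF quadrature. A minimal numpy sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def ray_weight_entropy(sigmas, deltas, eps=1e-10):
    """Entropy of the volume-rendering weight distribution on one ray.

    Standard NeRF quadrature:
        alpha_i = 1 - exp(-sigma_i * delta_i)
        w_i     = alpha_i * prod_{j<i} (1 - alpha_j)
    A peaked distribution (low entropy) means the ray is confident about
    where the surface lies; high entropy flags uncertain geometry.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    p = weights / (weights.sum() + eps)  # normalize to a distribution
    return float(-np.sum(p * np.log(p + eps)))

# Confident ray (one dense sample) vs. uncertain ray (diffuse density).
deltas = np.full(64, 0.05)
peaked = np.zeros(64); peaked[30] = 50.0
print(ray_weight_entropy(peaked, deltas))            # low entropy
print(ray_weight_entropy(np.full(64, 0.5), deltas))  # high entropy
```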
Graph neural networks (GNNs) enable deep learning on graphs and have shown promising results in capturing their structural information. This paper focuses on creating a small graph to represent the original graph, so that a GNN trained on the size-reduced graph can make accurate predictions. We view the original graph as a distribution of receptive fields and aim to synthesize a small graph whose receptive fields share a similar distribution. Thus, we propose Graph Condensation via receptive field Distribution Matching (GCDM), which optimizes the synthetic graph with a distribution-matching loss quantified by maximum mean discrepancy (MMD). In addition, we demonstrate that the synthetic graphs generated by GCDM generalize well to a variety of models at evaluation time, and that the condensation speed is significantly improved within this framework.
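The distribution-matching loss named here is the standard squared MMD under a kernel; a generic sketch with an RBF kernel is below, where the receptive-field embeddings, sizes, and bandwidth are assumptions rather than GCDM's actual components.

```python
import torch

def gaussian_mmd(x, y, bandwidth=1.0):
    """Squared maximum mean discrepancy between two feature sets with an
    RBF kernel: MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * bandwidth ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Receptive-field embeddings of the original vs. the synthetic graph;
# condensation would optimize the synthetic side to drive this loss down.
orig = torch.randn(1000, 64)
synthetic = torch.randn(50, 64, requires_grad=True)
loss = gaussian_mmd(orig, synthetic)
loss.backward()  # gradients flow into the synthetic graph's features
print(loss.item())
```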
Designed to track user goals in conversations, dialogue state trackers are an essential component of dialogue systems. However, research on dialogue state tracking has largely been limited to unimodality, in which slots and slot values are constrained by knowledge domains (e.g., a restaurant domain with slots for restaurant name and price range) and are defined by specific database schemas. In this paper, we propose to extend the definition of dialogue state tracking to multimodality. Specifically, we introduce a novel dialogue state tracking task that tracks information about the visual objects mentioned in video-grounded dialogues. Each new dialogue utterance may introduce a new video segment, new visual objects, or new object attributes, and a state tracker is required to update these information slots accordingly. We create a new synthetic benchmark and design a novel baseline, the Video-Dialogue Transformer Network (VDTN), for this task. VDTN combines object-level and segment-level features and learns contextual dependencies between videos and dialogues to generate multimodal dialogue states. We optimize VDTN for the state generation task as well as a self-supervised video understanding task that recovers video segment or object representations. Finally, we train VDTN to use the decoded states in a response prediction task. Together with comprehensive ablations and qualitative analysis, we uncover several interesting insights towards building more capable multimodal dialogue systems.
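To make the task definition concrete, the hypothetical sketch below shows the kind of slot bookkeeping a multimodal state tracker must perform as utterances arrive; it illustrates the state structure only and is not VDTN.

```python
from dataclasses import dataclass, field

@dataclass
class MultimodalDialogueState:
    """Slots for video-grounded dialogue state: each utterance may add a
    video segment, a visual object, or attributes of a tracked object."""
    segments: list = field(default_factory=list)  # (start_sec, end_sec) spans
    objects: dict = field(default_factory=dict)   # object_id -> attribute dict

    def update(self, segment=None, object_id=None, attributes=None):
        if segment is not None:
            self.segments.append(segment)
        if object_id is not None:
            self.objects.setdefault(object_id, {}).update(attributes or {})

state = MultimodalDialogueState()
state.update(segment=(12.0, 18.5), object_id="cup_1", attributes={"color": "red"})
state.update(object_id="cup_1", attributes={"material": "ceramic"})
print(state)
```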
Many basic indoor activities, such as eating or writing, always take place on different tabletops (e.g., coffee tables, writing desks). Understanding tabletop scenes is therefore indispensable for 3D indoor scene parsing applications. Unfortunately, 3D tabletop scenes are rarely available in current datasets, making it hard to meet this demand by directly deploying data-driven algorithms. To remedy this defect, we introduce TO-Scene, a large-scale dataset focusing on tabletop scenes, which contains 20,740 scenes with three variants. To acquire the data, we design an efficient and extensible framework in which a crowdsourcing UI is developed to transfer CAD objects from ModelNet and ShapeNet onto tables from ScanNet, and the resulting tabletop scenes are then simulated into realistic scans and annotated automatically. Furthermore, a tabletop-aware learning strategy is proposed to better perceive small-sized tabletop instances. Notably, we also provide a real scanned test set to verify the practical value of TO-Scene. Experiments show that algorithms trained on TO-Scene indeed work on the realistic test data, and that our proposed tabletop-aware learning strategy greatly improves state-of-the-art results on both 3D semantic segmentation and object detection tasks. The dataset and code are available at https://github.com/gap-lab-cuhk-sz/to-scene.